13 research outputs found

    Automated Image Analysis of High-field and Dynamic Musculoskeletal MRI

    A lightweight rapid application development framework for biomedical image analysis

    Biomedical imaging analysis typically comprises a variety of complex tasks requiring sophisticated algorithms and the visualisation of high-dimensional data. The successful integration and deployment of the enabling software to clinical (research) partners, for rigorous evaluation and testing, is a crucial step in facilitating the adoption of research innovations within medical settings. In this paper, we introduce the Simple Medical Imaging Library Interface (SMILI), an object-oriented open-source framework with a compact suite of objects geared for rapid, cross-platform biomedical imaging application development and deployment. SMILI supports the development of both command-line (shell and Python scripting) and graphical applications utilising the same set of processing algorithms. It provides a substantial subset of features when compared to more complex packages, yet it is small enough to ship with clinical applications with limited overhead and has a license suitable for commercial use. After describing where SMILI fits within the existing biomedical imaging software ecosystem, by comparing it to other state-of-the-art offerings, we demonstrate its capabilities in creating a clinical application for manual measurement of cam-type lesions of the femoral head-neck region for the investigation of femoro-acetabular impingement (FAI) from three-dimensional (3D) magnetic resonance (MR) images of the hip. This application proved convenient for radiological analyses and resulted in high intra-observer (ICC=0.97) and inter-observer (ICC=0.95) reliabilities for measurement of α-angles of the femoral head-neck region. We believe that SMILI is particularly well suited for prototyping biomedical imaging applications requiring user interaction and/or visualisation of 3D mesh, scalar, vector or tensor data.
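
    The abstract reports observer agreement as intraclass correlation coefficients but does not state which ICC form was used. As a rough illustration only, the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single rater) with NumPy; the observer count and α-angle values are hypothetical and not taken from the study.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, n_raters) array of measurements (e.g. alpha angles).
    """
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Two-way ANOVA decomposition of the sums of squares
    ss_rows = k * np.sum((row_means - grand_mean) ** 2)
    ss_cols = n * np.sum((col_means - grand_mean) ** 2)
    ss_total = np.sum((ratings - grand_mean) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical alpha angles (degrees) from two observers on five hips
angles = np.array([[58.0, 57.5], [63.2, 64.0], [49.8, 50.5], [71.1, 70.4], [55.0, 54.2]])
print(f"ICC(2,1) = {icc_2_1(angles):.3f}")
```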

    A novel mesh processing based technique for 3D plant analysis

    Background: In recent years, imaging based, automated, non-invasive, and non-destructive high-throughput plant phenotyping platforms have become popular tools for plant biology, underpinning the field of plant phenomics. Such platforms acquire and record large amounts of raw data that must be accurately and robustly calibrated, reconstructed, and analysed, requiring the development of sophisticated image understanding and quantification algorithms. The raw data can be processed in different ways, and the past few years have seen the emergence of two main approaches: 2D image processing and 3D mesh processing algorithms. Direct image quantification methods (usually 2D) dominate the current literature due to their comparative simplicity. However, 3D mesh analysis has tremendous potential for accurately estimating specific morphological features cross-sectionally and monitoring them over time. Result: In this paper, we present a novel 3D mesh based technique developed for temporal high-throughput plant phenomics and perform initial tests for the analysis of Gossypium hirsutum vegetative growth. Based on plant meshes previously reconstructed from multi-view images, the methodology involves several stages, including morphological mesh segmentation, phenotypic parameter estimation, and plant organ tracking over time. The initial study focuses on presenting and validating the accuracy of the methodology on dicotyledons such as cotton, but we believe the approach will be more broadly applicable. This study involved applying our technique to a set of six Gossypium hirsutum (cotton) plants studied over four time-points. Manual measurements, performed for each plant at every time-point, were used to assess the accuracy of our pipeline and quantify the error in the estimated morphological parameters. Conclusion: By directly comparing our automated mesh based quantitative data with manual measurements of individual stem height, leaf width and leaf length, we obtained mean absolute errors of 9.34%, 5.75%, and 8.78%, and correlation coefficients of 0.88, 0.96, and 0.95, respectively. The temporal matching of leaves was accurate in 95% of the cases, and the average execution time required to analyse a plant over four time-points was 4.9 minutes. The mesh processing based methodology is thus considered suitable for quantitative 4D monitoring of plant phenotypic features.
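
    The validation above compares automated mesh-based estimates against manual measurements using mean absolute (percentage) errors and correlation coefficients. A minimal sketch of those two metrics follows; the stem-height values are invented for illustration, and the error definition assumed here (absolute difference relative to the manual value) may differ from the paper's exact formulation.

```python
import numpy as np

def mean_absolute_percentage_error(manual: np.ndarray, automated: np.ndarray) -> float:
    """Mean absolute error of the automated estimates, as a percentage of the manual values."""
    return float(np.mean(np.abs(automated - manual) / manual) * 100.0)

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation coefficient between two sets of measurements."""
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical stem heights (cm): manual vs. mesh-based estimates for six plants
manual = np.array([12.1, 15.4, 9.8, 18.2, 14.0, 11.3])
auto = np.array([11.2, 16.5, 9.1, 19.7, 12.9, 12.4])
print(f"MAE = {mean_absolute_percentage_error(manual, auto):.2f}%, r = {pearson_r(manual, auto):.2f}")
```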

    Incremental shape learning of 3D surfaces of the knee, data from the osteoarthritis initiative

    Traditional shape learning of medical image data has been implemented via Principal Component Analysis (PCA). These PCA-based statistical shape models batch-process all shapes at once to generate a fixed model of shape variation as principal components, which may require significant computational resources for a large number of shapes. This paper applies incremental PCA (IPCA), which can efficiently adapt to changes in the training set, to a dataset of 728 surfaces derived from magnetic resonance imaging examinations displaying the articulating bones of the knee joint. After comparing the compactness and the shape-reconstruction accuracy of both the batch PCA and IPCA models, our results show that IPCA produces a model comparable to batch PCA in terms of compactness and applicability to shape reconstruction, while requiring considerably less processing time and computer memory.
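
    As a sketch of the incremental approach described above, the snippet below builds a shape model with scikit-learn's IncrementalPCA from flattened surface coordinates, updating the model one mini-batch of shapes at a time instead of loading all 728 shapes at once. The surfaces here are random placeholders (real surfaces would need to be in point correspondence), and the number of components and batch size are arbitrary choices, not the study's settings.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

# Placeholder data: 728 correspondent knee surfaces, each with 2,500 vertices in 3D,
# flattened to one row per shape. Real surfaces would come from segmented MR examinations.
rng = np.random.default_rng(0)
n_shapes, n_vertices = 728, 2500
shapes = rng.normal(size=(n_shapes, n_vertices * 3))

# Batch PCA would need all 728 shape vectors in memory at once; IPCA updates the
# model from small batches, so new shapes can be folded in as they become available.
ipca = IncrementalPCA(n_components=50)
for start in range(0, n_shapes, 56):          # process 56 shapes at a time
    ipca.partial_fit(shapes[start:start + 56])

# Project a shape into the model and reconstruct it to gauge reconstruction accuracy
coeffs = ipca.transform(shapes[:1])
reconstruction = ipca.inverse_transform(coeffs)
print(reconstruction.shape)  # (1, 7500)
```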

    Automatic bone segmentation and bone-cartilage interface extraction for the shoulder joint from magnetic resonance images

    We present a statistical shape model approach for automated segmentation of the proximal humerus and scapula with subsequent bone-cartilage interface (BCI) extraction from 3D magnetic resonance (MR) images of the shoulder region. Manual and automated bone segmentations from shoulder MR examinations of 25 healthy subjects, acquired using steady-state free precession sequences, were compared with the Dice similarity coefficient (DSC). The mean DSC scores between the manual and automated segmentations of the humerus and scapula bone volumes surrounding the BCI region were 0.926 ± 0.050 and 0.837 ± 0.059, respectively. The mean DSC values obtained for BCI extraction were 0.806 ± 0.133 for the humerus and 0.795 ± 0.117 for the scapula. The current model-based approach successfully provided automated bone segmentation and BCI extraction from MR images of the shoulder. This framework appears to provide a promising avenue for future automated segmentation and quantitative analysis of cartilage in the glenohumeral joint.
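
    The Dice similarity coefficient (DSC) used above measures volumetric overlap between a manual and an automated segmentation. A minimal NumPy sketch follows; the toy masks are hypothetical stand-ins, not shoulder data.

```python
import numpy as np

def dice_similarity(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    DSC = 2 |A intersect B| / (|A| + |B|), from 0 (no overlap) to 1 (identical).
    """
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy masks standing in for manual vs. automated humerus segmentations
manual = np.zeros((64, 64, 64), dtype=bool)
automated = np.zeros((64, 64, 64), dtype=bool)
manual[20:40, 20:40, 20:40] = True
automated[22:42, 20:40, 20:40] = True
print(f"DSC = {dice_similarity(manual, automated):.3f}")
```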

    Automated T2-mapping of the Menisci From Magnetic Resonance Images in Patients with Acute Knee Injury

    Rationale and Objectives: This study aimed to evaluate the accuracy of an automated method for segmentation and T2 mapping of the medial meniscus (MM) and lateral meniscus (LM) in clinical magnetic resonance images from patients with acute knee injury.
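
    Only the objective of this study is captured above, so the paper's actual fitting procedure is not described here. For orientation, T2 mapping is commonly performed by fitting a mono-exponential decay S(TE) = S0·exp(−TE/T2) to the multi-echo signal in each voxel; the sketch below shows that standard model for a single hypothetical voxel and should not be read as the authors' pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exponential(te, s0, t2):
    """Mono-exponential signal decay: S(TE) = S0 * exp(-TE / T2)."""
    return s0 * np.exp(-te / t2)

# Hypothetical echo times (ms) and signal from one meniscal voxel, with a little noise
echo_times = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
true_s0, true_t2 = 1200.0, 32.0
rng = np.random.default_rng(1)
signal = mono_exponential(echo_times, true_s0, true_t2) + rng.normal(0, 10, echo_times.size)

# Least-squares fit of S0 and T2 for this voxel; a T2 map repeats this per voxel
(s0_hat, t2_hat), _ = curve_fit(mono_exponential, echo_times, signal, p0=(signal[0], 30.0))
print(f"Estimated T2 = {t2_hat:.1f} ms")
```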

    Comparison of 3D bone models of the knee joint derived from CT and 3T MR imaging

    To examine whether magnetic resonance (MR) imaging can offer a viable alternative to computed tomography (CT) based 3D bone modeling. CT and MR (SPACE, TrueFISP, VIBE) images were acquired from the left knee joint of a fresh-frozen cadaver. The distal femur, proximal tibia, proximal fibula and patella were manually segmented from the MR and CT examinations. The MR bone models obtained from manual segmentations of all three sequences were compared to CT models using a similarity measure based on absolute mesh differences. The average absolute distances between the CT and the various MR-based bone models were all below 1 mm across all bones. The VIBE sequence provided the best agreement with the CT model, followed by the SPACE and then the TrueFISP data. The most notable difference was for the proximal tibia (VIBE 0.45 mm, SPACE 0.82 mm, TrueFISP 0.83 mm). The study indicates that 3D MR bone models may offer a feasible alternative to traditional CT-based modeling. A single radiological examination using MR imaging would allow simultaneous assessment of both bones and soft tissues, providing anatomically comprehensive joint models for clinical evaluation without the ionizing radiation of CT imaging.
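
    The comparison above relies on a similarity measure based on absolute mesh differences; the exact formulation is not given in this summary. The sketch below shows one common variant, a symmetric mean absolute surface distance computed from nearest-neighbour vertex matches with SciPy's cKDTree, applied to randomly generated placeholder vertex clouds rather than the cadaver models.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_absolute_surface_distance(vertices_a: np.ndarray, vertices_b: np.ndarray) -> float:
    """Symmetric mean absolute distance (mm) between two bone-surface vertex sets.

    Each vertex in A is matched to its nearest neighbour in B and vice versa;
    the two directed mean distances are averaged.
    """
    d_ab, _ = cKDTree(vertices_b).query(vertices_a)   # A -> B nearest-neighbour distances
    d_ba, _ = cKDTree(vertices_a).query(vertices_b)   # B -> A nearest-neighbour distances
    return 0.5 * (d_ab.mean() + d_ba.mean())

# Placeholder vertex clouds standing in for a CT tibia model and an MR (VIBE) model
rng = np.random.default_rng(2)
ct_vertices = rng.uniform(0, 80, size=(5000, 3))                   # mm
mr_vertices = ct_vertices + rng.normal(0, 0.4, ct_vertices.shape)  # small per-vertex offset
print(f"Mean absolute surface distance = {mean_absolute_surface_distance(ct_vertices, mr_vertices):.2f} mm")
```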

    Automated segmentation and T2-mapping of the posterior cruciate ligament from MRI of the knee: data from the osteoarthritis initiative

    Segmentation and quantitative tissue evaluation of the posterior cruciate ligament (PCL) from MRI will facilitate analyses of the morphological and biochemical changes associated with various knee injuries and conditions such as osteoarthritis (OA). In this paper, we validate a multi-scale patch-based method for automated segmentation of the PCL from multi-echo spin-echo T2-map MRI of the knee acquired from 26 asymptomatic volunteers. Volume, length and T2-relaxation properties of the PCL were then estimated, and validation was performed against manual segmentations of the T2-map images. We apply the method to an MR dataset of 88 patients from the osteoarthritis initiative to investigate differences in T2 properties of the PCL between knees at different stages of OA. A mean Dice's similarity coefficient of 74.4 ± 4.2% was obtained for the PCL segmentation. Moderate and strong correlations were noted between automated and manual volume, length and median T2 values (r(V)=0.67, r(L)=0.88, r(T2)=0.78). Wilcoxon rank-sum tests showed no significant differences in length and median T2 values of the PCL between patients at variable stages of knee OA.
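
    The group comparison above uses a Wilcoxon rank-sum test on median T2 values of the PCL. A minimal sketch with scipy.stats.ranksums is shown below; the T2 values are invented for illustration and do not come from the osteoarthritis initiative data.

```python
import numpy as np
from scipy.stats import ranksums

# Hypothetical median T2 values (ms) of the PCL for two groups of knees at
# different radiographic OA stages (values invented for illustration only)
t2_early_oa = np.array([38.2, 41.5, 36.9, 40.1, 39.3, 42.0, 37.8])
t2_late_oa = np.array([39.0, 40.7, 38.4, 41.8, 37.5, 40.2, 39.9])

# Wilcoxon rank-sum test: are the two distributions of median T2 values shifted?
statistic, p_value = ranksums(t2_early_oa, t2_late_oa)
print(f"rank-sum statistic = {statistic:.2f}, p = {p_value:.3f}")
```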